Cutting-edge technologies, and particularly three-dimensional digital modeling, are transforming architectural history, especially in the area of premodern architecture. Gone are the days of hand-drawn elevations and sketchy illustrations of vanished monuments: by integrating laser scans and photogrammetry with graphic images and written documents, three-dimensional digital modeling can produce the most evocative illustrations of architecture to date. In the past decade or so, architectural historians have turned to such modeling as a means of reverse engineering parts of buildings as well as whole buildings to visually examine change over time. The innovations of 3D modeling are particularly generative for research on the medieval European built environment, whose corpus has been significantly altered and diminished by time. In addition to offering new insights from the fragments of old or vanished structures, these technologies allow for new ways of seeing and interacting with materials, interior spaces, urban environments, and the phenomenology of design.1

Projects in the field of medieval architecture that have engaged these technologies have only begun to explore the new modes of communication they afford. The tendency to date has been to represent the digital historical model as a fait accompli through photorealistic digital illustrations or visualizations, the latter of which can represent both the end point of the digital modeling project and its data in a single interactive form.2 Too often, however, digitally produced images effect an “enchantment” that disguises the researcher’s visual argument.3 The stunning illusionism of the images obfuscates the underlying evidence and the extent to which it has been interpreted, specifically in its adaptation to three-dimensional computer-based geometries.
Indeed, clear articulation of the level or percentage of certainty in a visualization of a digital historical reconstruction is becoming an increasingly critical issue in the practice.4 The three first-generation digital historical reconstruction projects for medieval architecture discussed here—the work published in Notre-Dame Cathedral: Nine Centuries of History, by Dany Sandron and the late Andrew Tallon; the project titled St Stephen’s Chapel, Westminster: Visual and Political Culture, 1292–1941; and Visualizing Venice—represent only a small fraction of the many similar projects that have been undertaken, and all engage these questions.5 As pioneering studies in the field, initiated about a decade ago, these three projects were among the first to encounter the new challenges and rewards of this technology for medieval architectural history.6 In the process, they confronted myriad questions, such as: How, and to what degree, can a historical structure be translated to a virtual environment? Can a virtual reconstruction based on partial evidence be accurate? Is direct extrapolation from fragments or partial evidence factual? How should the research and its technological limits be articulated? The projects discussed here applied different solutions to these issues, and they serve as benchmarks for the paradigm shift currently taking place in architectural history, one that challenges academia to convey research in a more transparent, interactive, and discursive way than ever before.

In Notre-Dame Cathedral: Nine Centuries of History, Sandron and Tallon present a series of brilliant color illustrations of the Parisian cathedral that articulate its phases of construction from 1163 to 2013, the year of the book’s initial publication in French.7 A pithy text accompanies the images and describes the construction sequence along with related topics, such as the cathedral’s funding program and furnishings.
It is, however, easy to overlook this encyclopedic commentary given the predominance of the imagery, which includes digital reconstructions hovering at the top of nearly every page, as well as numerous supporting photographs.

With crisp lines and photorealistic colors and textures, the digital illustrations in Notre-Dame Cathedral elaborate a persuasive interpretation of the building’s transformations. Because all of the illustrations are rendered with the same standards of detail and clarity, they are particularly authoritative. Evidence for the digital reconstruction is cited in the text or presented through printed photographs arranged near the illustrations, but it is not explicitly discussed in relation to the model. Tallon’s groundbreaking laser scans of Notre-Dame, several mesmerizing images of which are presented, appear to validate the digital reconstruction, although they do not consistently inform the illustrations.8 In areas of the cathedral that have been the subject of debate, such as the flying buttresses of the choir, the images of the historical digital reconstruction represent the authors’ hypotheses with the same clarity, texture, and color as all the other images.9 Also illustrated are completely fabricated details, such as wood arch centerings, scaffolding, piles of ashlar, rubble, and timber, that support the idea of the cathedral as a work in progress.10 Even if these details are obviously imagined, they function as “reality effects” and add credibility to the suggested progression of the cathedral’s construction history.11

Such computer-based illustrations rely on the assumptions of traditional publication practices, in which anything drawn, whether by hand or computer, is understood to be an interpretation and/or to have some degree of error. But because computational methods can model hypothetical interpretations with photorealistic authority, such images can be misleading.
The display of numerous evidential photographs along with text in Notre-Dame Cathedral is a solution that represents a tacit recognition of and initial response to this concern.

The digital reconstruction project titled St Stephen’s Chapel, Westminster: Visual and Political Culture, 1292–1941, officially launched in 2013 by Tim Ayers, John Cooper, and Miles Taylor, took a multifaceted approach to this problem. St Stephen’s Chapel, a double-level royal chapel, stood within Westminster Palace in London from 1292 until the destruction of its upper level by fire in 1834. The site is currently occupied by the Houses of Parliament, with the heavily restored lower chapel known as St Mary Undercroft and the reconstructed upper level, which served as the first meeting room for the House of Commons, now known as St Stephen’s Hall. An elaborate website hosts a stunning visualization of the upper chapel, replete with polychromy.12 The website explains that the digital reconstruction is an imaginative representation of the chapel based on the researchers’ interpretation of primary and secondary source evidence. Offering options to explore the chapel’s life over seven time periods, the pages include digital visualizations, illustrations, and source images. The “Medieval Chapel, 1360s” tab offers the viewer the opportunity to interact with a three-dimensional panorama of the digitally reconstructed chapel and to view stills of it in the context of the changing palace, itself represented in transparent white. The three-dimensional panorama of the upper chapel sets the viewpoint at approximately eye level, and the viewer can move the angle and the camera forward and backward within the interior space finished with (virtual) sculpture, a rood screen, and stained glass.
Throughout the virtual experience, information buttons pop open to describe the evidence concerning the different materials, furnishings, and parts of the model (such as the type of stone used, the timber screen, the stained glass). The Virtual St Stephen’s website balances the experience of the visualization with superimposed descriptions of the evidence underlying it, which effectively underscore the visualization as an argument.

A number of publications associated with this project offer more thorough analysis of the sources on which the researchers relied and the choices they made. Unlike the fragmentary evidence remaining for the majority of Gothic buildings, the documentation for St Stephen’s is copious, and the different types of information produced during the chapel’s long life (graphic, lithic, documentary) include the complete account rolls for the building’s initial construction from 1292 to 1366. At more than 640 feet long, these surviving rolls are an exceptional rarity. Ayers and Maureen Jurkowski have produced a critical edition of those accounts that offers rich information concerning the chapel’s makers, its economics, and the different phases of its initial construction period.13 In a section dedicated to the St Stephen’s project in the online journal British Art Studies, Ayers, the principal medievalist art historian on the project, describes how he made use of the sources with respect to the 1360s model, in addition to explaining the alternative possibilities and the remaining questions that the process was unable to answer.14 For example, the team filled lacunae in the iconography, composition, and style of the stained-glass windows by adapting elements from contemporary structures and by taking into account the chapel’s interior furnishings, such as the timber screen.
In the face of conflicting information, such as that concerning the clerestory, the researchers offered their own solution based on iterative analyses in dialogue with earlier graphic illustrations. In another section of British Art Studies, the authors explain how the modeler worked with the academic team of humanists to re-create St Stephen’s as accurately as possible given the limits of the technology and even of human perception.15 For example, while the colors for the polychromy were translated to numerical RGB codes, they appear different depending on the computer screen projecting them. Indeed, in some cases the pixelated pigments lack sharpness and tend to flatten the panorama. Additional research and publications by various members of the team—including work on the history of the canons, the chapel’s transformations over time, and its historical and architectural contexts and significance—are ongoing and forthcoming.

Like the project described in Notre-Dame Cathedral, the St Stephen’s project abides by traditional academic practices in that it presents visualizations of completed research that are explained primarily in external publications. The approach privileges the authors’ interpretations over a transparent visual display of the evidence and questions surrounding the visualization itself. One could not identify where the authors took interpretive license without either reading all the associated publications or completely re-creating the project to understand the limits of the three-dimensional evidence for the chapel.
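The display-dependence of the RGB codes noted above can be made concrete: the same 8-bit triple yields different light output depending on the display’s transfer function, which is one reason an uncalibrated monitor shifts the reconstructed polychromy. The sketch below is illustrative only and is not drawn from the St Stephen’s project; the “ochre” value is an invented example.

```python
def srgb_to_linear(c8: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (IEC 61966-2-1 curve)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(c8: int) -> float:
    """The same channel as decoded by a simple gamma-2.2 display instead."""
    return (c8 / 255.0) ** 2.2

# A hypothetical ochre pigment code from a reconstruction: RGB (204, 119, 34).
ochre = (204, 119, 34)
srgb = [srgb_to_linear(c) for c in ochre]
g22 = [gamma22_to_linear(c) for c in ochre]
# The two decodings disagree by a few percent per channel, enough to shift
# a pigment's apparent hue and brightness from one screen to another.
```

The point is not the specific numbers but that a single stored code has no fixed appearance: without a calibrated, color-managed pipeline, each viewer sees a slightly different polychromy.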
Although the Virtual St Stephen’s project exemplifies the highest standards of digital visualization practices in academia to date by pointing to sources and describing decisions online and in print, digitally based visualizations should transcend the presentation of an argument, because the iterative possibilities of the digital environment afford a multilayered and discursive display.

In responding to such concerns, Visualizing Venice, a digitally born project that formally began in 2010 and developed into a long-standing collaboration among Duke University, Università degli Studi di Padova, and Università IUAV di Venezia, has taken a more explicitly experimental perspective. Unlike the other projects considered here, Visualizing Venice is a group of researchers who engage in mapping, 3D modeling, and multimedia analyses to better understand architectural, urban, and historical change in Venice. Case studies unite teams in fields ranging from art history to engineering that ask different field-specific questions of one site at a time. For example, in an early case study initiated in 2010, the project team illustrated urban change in the area of SS. Giovanni e Paolo and the adjacent Scuola Grande di San Marco in Venice.16 One group tracked the progression of the buildings’ architectural and sculptural modifications over time. Another group examined how pathways and movement in the campo were related to the placement of sculpture and architectural articulation. A third group considered actual urban viewpoints as “reverse perspectives” in relation to perspectival views in paintings by Canaletto. Other case studies under the umbrella of the Visualizing Venice project have considered the long life of the Accademia, the Ghetto, and the theme of food and water in the city. Visualizing Venice functions as a research laboratory that brings new questions, skill sets, and technologies to the analysis of the urban history of the city.
The evolving nature of the project allows team members to adapt their methods as new technologies emerge and as they explore new case studies.

The anthology Visualizing Venice: Mapping and Modeling Time and Change in a City, published in 2018, describes the state of the research project and its insights rather than serving as a final output or a fixed argument.17 The book’s first section presents an account of the purpose of the broader project and explains the development of its research questions and visualization methods. The second part enumerates the range of case studies undertaken by the different teams, from unbuilt architecture to urban change and lost monuments. The third section details the tools and technology employed by the project up to the time of publication. This last section describes the means of making and representing visualizations in the most explicit and transparent terms to date for medieval architectural historians. Andrea Giordano and Mark Olson explain how researchers in the Visualizing Venice project have intentionally shifted away from photorealism and toward rendering models in a grayscale palette so that they seem “as if they were constructed of wet clay, malleable and not yet fixed.”18 This approach is conveyed in the anthology as well as on the Visualizing Venice website, which records past and current projects and, significantly, shows models in grayscale rather than colorful visualizations. As a means to express both contingency and alternative hypotheses, Visualizing Venice adopted historic building information modeling (HBIM) software.19

HBIM is increasingly the software of choice for historical reconstructions because its technology can integrate 2D materials with 3D sources and geometries in a virtual environment. This software thus can serve as the archive for a project as well as the locus of its visualization.
Moreover, BIM/HBIM has the capacity to integrate related information and extract it from these different resources; for example, it can calibrate how materials will behave in a structure or generate plans and sections. According to its advocates, this type of software offers a means of enabling “semantically rich” historical models that contain information about a structure’s transformation over time, offering “a rich knowledge base for a project, a ‘thick description’ capturing and assembling data from different sources into a single and interoperable whole” with all information contained within it.20 With its ability to consolidate all the information related to a digital project in one space or file, HBIM has the potential to radically transform the traditional practice of academic argumentation for architectural history. Specifically, it is likely to shift the dissemination of research from written arguments based on calculated rhetoric stemming from the author’s perspective to an interactive experience of the authorial process, thereby giving the viewer new means to assess and understand an author’s conclusions.

However, there is still a long way to go before HBIM software will be user-friendly for most historians practicing digital modeling. In the first place, the technology has a steep learning curve that demands a substantial time investment as well as specialized practitioners (including engineers, depending on the project goals).
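The “semantically rich” model element described above can be sketched in miniature as a data structure that binds geometry to its time phase, its evidentiary sources, and a level of certainty. This is an illustrative sketch under invented names, not the schema of any actual BIM/HBIM package:

```python
from dataclasses import dataclass, field
from enum import Enum

class Certainty(Enum):
    """Hypothetical certainty scale for a reconstructed element."""
    SURVIVING_FABRIC = 1  # documented by the standing building or laser scans
    DOCUMENTED = 2        # attested in accounts, drawings, or photographs
    ANALOGY = 3           # adapted from comparable contemporary structures
    CONJECTURE = 4        # the researchers' hypothesis

@dataclass
class ModelElement:
    """One element of a 'semantically rich' historical model."""
    name: str
    phase: tuple[int, int]  # construction phase, e.g., (1292, 1366)
    certainty: Certainty
    sources: list[str] = field(default_factory=list)  # citations backing the element

# A hypothetical entry for one reconstructed window:
window = ModelElement(
    name="clerestory window, bay 3",
    phase=(1330, 1360),
    certainty=Certainty.ANALOGY,
    sources=["hypothetical account-roll entry", "hypothetical antiquarian drawing"],
)
```

A model built from such records could drive exactly the kind of color-coded certainty overlay discussed below: render each element by its `certainty` field rather than by photorealistic texture.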
Also, the software is attuned to contemporary architectural practices and does not readily create irregular or organic shapes, such as would be required for forms like rib vaults, detailed capitals, and variegated architectural articulation; such shapes translate into the program as complicated nonuniform rational B-splines, or NURBS.21 While HBIM can create highly nuanced color-coded graphic overlays that represent or distinguish source material from areas of interpretation, or different levels of (un)certainty, a clunky Windows-based interface integrates the text or numerical data on the back end with the three-dimensional model on the front end. At the moment, additional software, such as Sketchfab, and other new widgets are still required to allow the user to interact with the 2D data fluidly in the front end of a model. In fact, there is currently no single computational solution that offers intuitive and integrated interfaces for the visualization and examination of evidence while maintaining a stable archive of finished outputs.22

Thus neither technology nor theory has yet achieved the desired goals. Given the current plurality of workarounds, such as those seen among the digital projects discussed here, there is a clear need for more decisive standards for digital historical reconstructions and visualizations in academia. The London Charter for the Computer-Based Visualisation of Cultural Heritage (version 1, 2006, and version 2, 2009) provides a baseline for the practice, particularly in the area of conservation and related practices, such as digital modeling of lost monuments.
The charter advocates for the documentation and dissemination of visualization projects to allow the “methods and outcomes to be understood and evaluated in relation to the contexts and purposes for which they are deployed.”23 However, the means to that end are not specifically outlined, so there remains a lack of standardization in the output, as we have seen.24 Output standards for digital visualizations and modeling would facilitate legibility, enable comparisons, and set benchmarks for the assessment of relative merits. This is, after all, not dissimilar to the way in which text-based academic publications are validated—through formatting, style, footnotes, bibliographic information, and the peer-review process. A standard means of presentation for digital visualizations would facilitate their accessibility and assessment in academia.

Although HBIM cannot produce an ideal academic visualization right now, technology’s wheels are turning toward greater interactivity, both with the virtual space on the screen and through the development of more human-centered approaches. Signs of this direction include augmented reality, which allows for the display of a virtual historical model or image over a live camera feed on a mobile device, and virtual reality, which offers an immersive experience in a virtual environment; both are increasingly employed in cultural heritage sectors. It is hard to imagine, but the development of the metaverse may provide another way to experience virtual reproductions and interpretations of historical sites and spaces, and may eventually become itself part of the historical record. For scholars, the ability to see and virtually inhabit historical monuments from these unprecedented perspectives promises new opportunities for exploration and understanding.
While the current expressive limitations of the technology still apply in those formats, the drive toward interactivity may facilitate the means to make digital models more transparent.

The many benefits of digital historical modeling and visualization continue to expand with the evolution of technology and the introduction of new projects.25 In terms of production, the early projects discussed here demonstrate that the digital approach to architectural history invites collaborative work, with teams that often include humanists in fields such as history, literature, and musicology, in addition to graphic artists and computer and structural engineers. Such teams may also include both graduate and undergraduate students, who can learn about their subjects through the application of these digital means. The benefits of team-based research are well known: groups can reach important new insights when members bring different perspectives and levels of knowledge and engage in the rigorous debate of ideas. Such a working method not only expedites a project but also expands the questions that can be asked of it. The idea of knowledge disseminated primarily from those at the top is fast giving way to approaches that include a wider scope of contributors.

Once digital historical reconstructions can be articulated as multilayered three-dimensional digital collages that manifest and distinguish primary, secondary, and hypothetical sources, they will also provide a new means of presenting scholarly work that explicitly shows the “levels of certainty” and makes it easier for viewers to engage and assess them. This level of explicitness may also be quantifiable. While such a notion may be anathema to most humanists today, the ability to objectively determine the validity of an argument would not detract from its discursive value, but rather make it easier to understand and engage.
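One hedged sketch of how such a quantified level of certainty might work: if each model element carried a certainty category and a size (here, surface area), an area-weighted mean would summarize how much of a reconstruction rests on evidence rather than conjecture. The categories, weights, and figures below are invented for illustration, not drawn from any of the projects discussed.

```python
# Hypothetical certainty weights: 1.0 = surviving fabric, down to 0.25 = conjecture.
WEIGHTS = {"surviving": 1.0, "documented": 0.75, "analogy": 0.5, "conjecture": 0.25}

def certainty_score(elements: list[tuple[str, float]]) -> float:
    """Area-weighted mean certainty for a list of (category, area_m2) elements."""
    total_area = sum(area for _, area in elements)
    if total_area == 0:
        return 0.0
    return sum(WEIGHTS[cat] * area for cat, area in elements) / total_area

# A toy reconstruction: mostly surviving fabric, with one conjectural vault.
model = [("surviving", 120.0), ("documented", 40.0),
         ("analogy", 25.0), ("conjecture", 15.0)]
score = certainty_score(model)  # a value between 0 and 1
```

A single number is of course a blunt instrument; its value here is as a legible, comparable summary sitting alongside, not replacing, the element-by-element display of evidence.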
This more interactive and structured method of presentation will permit viewers to experience digital historical reconstructions in a nonlinear fashion and on their own terms. Such a transformation will shift the typical direction of information exchange in academia.

Yet we might still question whether these changes would allow a digital academic argument to stand on its own. Can the medium alone convey the message?26 As we know from the real world, historical structures cannot speak for themselves, even when their documentation and other sources are collected with them. A path, or a ductus, or indeed some kind of interlocutor who can communicate actions and intentions, remains ever essential.27

Thus we still have a way to go before academia reaches the point of multifaceted presentation and networked communication. However, once digital historical reconstructions meet such standards, they may ultimately serve as primary (as well as secondary) resources. With a clearer and more rigorous approach to the levels of certainty within the fragmented history of architecture, three-dimensional digital historical models and their visualizations may enter into the corpus of architectural history. This will allow scholars to move beyond basic, unresolvable, open, or unanswered questions and to consider hypothetical arguments that will bring new insights and inquiries to the field.